    Frequentist validity of Bayesian limits

    To the frequentist who computes posteriors, not all priors are useful asymptotically: in this paper, a Bayesian perspective on test sequences is proposed and Schwartz's Kullback-Leibler condition is generalised to widen the range of frequentist applications of posterior convergence. With Bayesian tests and a weakened form of contiguity termed remote contiguity, we prove simple and fully general frequentist theorems, for posterior consistency and rates of convergence, for consistency of posterior odds in model selection, and for conversion of sequences of credible sets into sequences of confidence sets with asymptotic coverage one. For frequentist uncertainty quantification, this means that a prior inducing remote contiguity allows one to enlarge credible sets of calculated, simulated or approximated posteriors to obtain asymptotically consistent confidence sets.
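    The credible-to-confidence conversion the abstract describes can be illustrated with a minimal Monte Carlo sketch (not taken from the paper; the conjugate normal model and the 1.2 inflation factor are illustrative assumptions): compute a 95% credible interval from a posterior, then check its frequentist coverage with and without enlargement.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    theta0 = 1.0           # true parameter, fixed by the frequentist
    n, reps = 50, 2000     # sample size, Monte Carlo repetitions

    # Toy conjugate setup: X_i ~ N(theta, 1), prior theta ~ N(0, 10).
    prior_var = 10.0
    post_var = 1.0 / (n + 1.0 / prior_var)   # posterior variance

    covered_cred, covered_enlarged = 0, 0
    for _ in range(reps):
        x = rng.normal(theta0, 1.0, size=n)
        post_mean = post_var * x.sum()           # posterior mean
        half = 1.96 * np.sqrt(post_var)          # 95% credible half-width
        covered_cred += abs(theta0 - post_mean) <= half
        covered_enlarged += abs(theta0 - post_mean) <= 1.2 * half  # enlarged set

    print(covered_cred / reps, covered_enlarged / reps)
    ```

    In this well-specified conjugate example credible sets already have near-nominal coverage, so enlargement only pushes coverage higher; the paper's contribution is a general condition (remote contiguity) under which such enlargement yields asymptotic confidence sets even when no conjugate analysis is available.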

    Bayesian asymptotics under misspecification

    Vaart, A.W. van der [Promotor]

    Misspecification in infinite-dimensional Bayesian statistics

    We consider the asymptotic behavior of posterior distributions if the model is misspecified. Given a prior distribution and a random sample from a distribution P0, which may not be in the support of the prior, we show that the posterior concentrates its mass near the points in the support of the prior that minimize the Kullback–Leibler divergence with respect to P0. An entropy condition and a prior-mass condition determine the rate of convergence. The method is applied to several examples, with special interest for infinite-dimensional models. These include Gaussian mixtures, nonparametric regression and parametric models.
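    The central phenomenon — the posterior concentrating at the Kullback–Leibler minimizer rather than at any "true" model point — can be seen in a minimal simulation sketch (not from the paper; the exponential data distribution and conjugate normal model are illustrative assumptions). For a normal location model with known unit variance, KL(P0 || N(theta, 1)) is minimized at theta* = E_P0[X], so with P0 = Exponential(1) the posterior should pile up near theta* = 1 even though P0 is not in the model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # True data distribution P0: Exponential(1) -- NOT in the model.
    # Misspecified model: X ~ N(theta, 1), prior theta ~ N(0, 100).
    # KL(P0 || N(theta, 1)) is minimized at theta* = E_P0[X] = 1.
    n = 20000
    x = rng.exponential(1.0, size=n)

    prior_var = 100.0
    post_var = 1.0 / (n + 1.0 / prior_var)   # conjugate posterior variance
    post_mean = post_var * x.sum()           # conjugate posterior mean

    # Posterior mean lands near the KL-minimizer theta* = 1,
    # with posterior spread shrinking at the usual 1/sqrt(n) rate.
    print(post_mean, np.sqrt(post_var))
    ```

    The rate statement in the abstract refines this picture: entropy and prior-mass conditions control how fast the posterior ball around theta* shrinks.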

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans are able to do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information Based Complexity. (37 pages)
